
    A Parametric Simplex Algorithm for Linear Vector Optimization Problems

    In this paper, a parametric simplex algorithm for solving linear vector optimization problems (LVOPs) is presented. This algorithm can be seen as a variant of the multi-objective simplex (Evans-Steuer) algorithm [12]. Unlike that algorithm, the proposed one works in the parameter space and does not aim to find the set of all efficient solutions. Instead, it finds a solution in the sense of Loehne [16], that is, a subset of efficient solutions that allows one to generate the whole frontier. In that sense, it can also be seen as a generalization of the parametric self-dual simplex algorithm, which is originally designed for solving single-objective linear optimization problems and is modified in Ruszczynski and Vanderbei [21] to solve two-objective bounded LVOPs with the positive orthant as the ordering cone. The algorithm proposed here works for any dimension, for any solid pointed polyhedral ordering cone C, and for bounded as well as unbounded problems. Numerical results are provided to compare the proposed algorithm with an objective-space-based LVOP algorithm (Benson's algorithm in [13]), which also provides a solution in the sense of [16], and with the Evans-Steuer algorithm [12]. The results show that for non-degenerate problems the proposed algorithm outperforms Benson's algorithm and is on par with the Evans-Steuer algorithm. For highly degenerate problems, Benson's algorithm [13] outperforms the simplex-type algorithms; even for these problems, however, the parametric simplex algorithm is computationally much more efficient than the Evans-Steuer algorithm.
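
    To fix notation (a standard LVOP formulation with generic symbols, not quoted from the paper), an LVOP with a solid pointed polyhedral ordering cone $C \subseteq \mathbb{R}^q$ can be written as

        \[
        \min\nolimits_C \; Px \quad \text{subject to} \quad Ax \ge b,
        \]

    where $P \in \mathbb{R}^{q \times n}$ and $y \le_C y'$ means $y' - y \in C$. A parametric simplex method traverses the weighted-sum scalarizations $\min\{w^\top Px : Ax \ge b\}$ for weights $w \in \operatorname{int} C^+$, pivoting between bases as $w$ moves through the parameter space.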

    Primal and Dual Approximation Algorithms for Convex Vector Optimization Problems

    Two approximation algorithms for solving convex vector optimization problems (CVOPs) are provided. Both algorithms solve the CVOP and its geometric dual problem simultaneously. The first algorithm is an extension of Benson's outer approximation algorithm, and the second one is a dual variant of it. Both algorithms provide an inner as well as an outer approximation of the (upper and lower) images. Only one scalar convex program has to be solved in each iteration. We allow objective and constraint functions that are not necessarily differentiable, allow solid pointed polyhedral ordering cones, and relate the approximations to an appropriate $\epsilon$-solution concept. Numerical examples are provided.
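
    For orientation (a hedged paraphrase of the usual Benson-type subproblem, not necessarily the paper's exact program): writing the upper image as $\mathcal{P} = \operatorname{cl}(\Gamma(\mathcal{X}) + C)$ for objective map $\Gamma$ and feasible set $\mathcal{X} = \{x : g(x) \le 0\}$, the scalar convex program solved in each iteration, for a vertex $v$ of the current outer approximation and a fixed direction $c \in \operatorname{int} C$, typically takes the form

        \[
        \min_{x,\,z} \; z \quad \text{subject to} \quad g(x) \le 0, \quad \Gamma(x) \le_C v + z\,c.
        \]

    Its optimal value measures how far $v$ lies outside the upper image along $c$: a supporting halfspace at the attained boundary point cuts $v$ off and refines the outer approximation, while the attained value $\Gamma(x)$ refines the inner one.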

    Computing recession cone of a convex upper image via convex projection

    It is possible to solve unbounded convex vector optimization problems (CVOPs) in two phases: (1) computing or approximating the recession cone of the upper image and (2) solving the equivalent bounded CVOP in which the ordering cone is extended based on the first phase (Wagner et al., 2023). In this paper, we consider unbounded CVOPs and propose an alternative methodology for computing or approximating the recession cone of the upper image. In particular, we relate the dual of the recession cone to the Lagrange duals of weighted-sum scalarization problems whenever the dual problem can be written explicitly. Computing this set requires solving a convex (or polyhedral) projection problem. We show that this methodology can be applied to semidefinite, quadratic, and linear vector optimization problems, and we provide some numerical examples.
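
    Schematically (in generic notation, and modulo closure and regularity conditions that the paper makes precise), the relation being exploited is that for the upper image $\mathcal{P} = \operatorname{cl}(\Gamma(\mathcal{X}) + C)$ and a weight $w \in C^+$,

        \[
        w \in (\operatorname{recc} \mathcal{P})^+ \iff \inf_{x \in \mathcal{X}} w^\top \Gamma(x) > -\infty,
        \]

    so describing the recession cone reduces to describing the set of weights whose weighted-sum scalarizations are bounded, which is where the convex (or polyhedral) projection problem arises.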

    Outer approximation algorithms for convex vector optimization problems

    In this study, we present a general framework of outer approximation algorithms for solving convex vector optimization problems, in which the Pascoletti-Serafini (PS) scalarization is solved iteratively. This scalarization finds the minimum 'distance' from a reference point, usually taken as a vertex of the current outer approximation, to the upper image along a given direction. We propose efficient methods to select the parameters (the reference point and direction vector) of the PS scalarization and analyze their effects on the overall performance of the algorithm. Unlike the existing vertex selection rules from the literature, the proposed methods do not require solving additional single-objective optimization problems. Using some test problems, we conduct an extensive computational study in which three different measures are set as the stopping criteria: the approximation error, the runtime, and the cardinality of the solution set. We observe that the proposed variants perform well, especially in terms of runtime, compared to the existing variants from the literature.
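
    As a concrete illustration (a minimal sketch on an invented toy problem, not the paper's implementation), the PS scalarization with ordering cone $C = \mathbb{R}^2_+$ can be solved with an off-the-shelf NLP solver; here the reference point $v$ and direction $d$ are chosen by hand:

        import numpy as np
        from scipy.optimize import minimize

        # Toy biobjective problem: minimize (x1, x2) over the disk
        # (x1-1)^2 + (x2-1)^2 <= 1, with ordering cone C = R^2_+.
        # PS scalarization: min z  s.t.  Gamma(x) <= v + z*d componentwise.
        def ps_scalarization(v, d):
            obj = lambda y: y[2]                      # y = (x1, x2, z)
            cons = [
                {"type": "ineq",                      # disk feasibility
                 "fun": lambda y: 1 - (y[0] - 1)**2 - (y[1] - 1)**2},
                {"type": "ineq",                      # x1 <= v1 + z*d1
                 "fun": lambda y: v[0] + y[2] * d[0] - y[0]},
                {"type": "ineq",                      # x2 <= v2 + z*d2
                 "fun": lambda y: v[1] + y[2] * d[1] - y[1]},
            ]
            res = minimize(obj, x0=np.array([1.0, 1.0, 1.0]),
                           constraints=cons, method="SLSQP")
            return res.x[:2], res.x[2]

        # Reference point below the frontier, direction in the interior of C;
        # the optimal z moves v onto the boundary of the upper image.
        x_star, z_star = ps_scalarization(v=np.array([0.0, 0.0]),
                                          d=np.array([1.0, 1.0]))
        print("boundary point:", x_star, "optimal z:", z_star)

    The parameter-selection rules proposed in the paper automate exactly this choice of $v$ and $d$ in each iteration.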

    Dividend optimization for a jump diffusion model

    We consider a dividend optimization problem in which the objective is to maximize the expected value of the total dividends paid during the lifetime of a company. The capital process is assumed to be a jump-diffusion, and dividends are paid out continuously until the capital process hits a default barrier. At any time, the company may distribute dividends at the full rate; however, doing so brings the capital process closer to the ruin barrier. Hence, we need to find a strategy (from a given admissible set) that resolves this trade-off optimally. We show that the structure of the optimal policy depends on the parameters of the problem. We identify an optimal policy for the different cases, and we show how to compute the value function of the problem.
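
    To make the trade-off tangible, the following Monte Carlo sketch (all parameter values, the exponential jump law, and the threshold-type strategy are illustrative assumptions, not taken from the paper) estimates the expected discounted dividends of a policy that pays at the full rate whenever capital exceeds a barrier b:

        import numpy as np

        rng = np.random.default_rng(0)

        def discounted_dividends(x0, b, mu=1.0, sigma=0.5, lam=0.5,
                                 jump_mean=1.0, c_max=1.5, delta=0.05,
                                 T=50.0, dt=1e-2):
            """One path of X with dX = (mu - c) dt + sigma dW - dJ,
            paying at full rate c_max while X >= b, until ruin or T."""
            t, x, total = 0.0, x0, 0.0
            while t < T and x > 0:
                rate = c_max if x >= b else 0.0       # threshold strategy
                total += np.exp(-delta * t) * rate * dt
                x += (mu - rate) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                if rng.random() < lam * dt:           # compound Poisson claim
                    x -= rng.exponential(jump_mean)
                t += dt
            return total

        values = [discounted_dividends(x0=2.0, b=1.5) for _ in range(200)]
        print("estimated value of the barrier strategy:", np.mean(values))

    Sweeping b and comparing the estimates illustrates the tension the optimal policy resolves: a low barrier pays dividends earlier but hastens ruin.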

    An Iterative Vertex Enumeration Method for Objective Space Based Vector Optimization Algorithms

    One application area of the vertex enumeration problem (VEP) is its use within objective-space-based linear/convex vector optimization algorithms whose aim is to generate (an approximation of) the Pareto frontier. In such algorithms, the VEP, which is defined in the objective space, is solved in each iteration and has a special structure: the recession cone of the polyhedron to be generated is the ordering cone. We consider and give a detailed description of a vertex enumeration procedure that iterates by calling a modified 'double description (DD) method' designed for such unbounded polyhedra. We employ this procedure as a subroutine of an existing objective-space-based vector optimization algorithm (Algorithm 1) and test its performance on randomly generated linear multiobjective optimization problems. We compare the efficiency of this procedure with another existing DD method as well as with the current vertex enumeration subroutine of Algorithm 1. We observe that the modified procedure outperforms the others, especially as the dimension of the vertex enumeration problem (the number of objectives of the corresponding multiobjective problem) increases.
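
    The special structure can be summarized as follows (in generic notation): the polyhedron maintained at iteration $k$ carries both an inequality and a generator representation,

        \[
        \mathcal{P}_k = \{\, y \in \mathbb{R}^q : A_k y \ge b_k \,\} = \operatorname{conv} V_k + C, \qquad \operatorname{recc} \mathcal{P}_k = C,
        \]

    and each iteration typically intersects $\mathcal{P}_k$ with a single new halfspace. A DD step then only has to update the vertex set $V_k$, since the recession cone stays fixed at the ordering cone $C$ rather than being re-enumerated.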

    Algorithms for DC Programming via Polyhedral Approximations of Convex Functions

    There is an existing exact algorithm that solves DC programming problems if one component of the DC function is polyhedral convex (Loehne, Wagner, 2017). Motivated by this, we first consider two cutting-plane algorithms for generating an $\epsilon$-polyhedral underestimator of a convex function g. The algorithms start with a polyhedral underestimator of g, and in each iteration the epigraph of the current underestimator is intersected with either a single halfspace (Algorithm 1) or with possibly multiple halfspaces (Algorithm 2) to obtain a better approximation. We prove the correctness and finiteness of both algorithms, establish the convergence rate of Algorithm 1, and show that after obtaining an $\epsilon$-polyhedral underestimator of the first component of a DC function, the algorithm from (Loehne, Wagner, 2017) can be applied to compute an $\epsilon$-solution of the DC programming problem without further computational effort. We then propose an algorithm (Algorithm 3) for solving DC programming problems by iteratively generating a (not necessarily $\epsilon$-)polyhedral underestimator of g. We prove that Algorithm 3 stops after finitely many iterations and returns an $\epsilon$-solution to the DC programming problem. Moreover, the sequence $\{x_k\}_{k\geq 0}$ output by Algorithm 3 converges to a global minimizer of the DC problem when $\epsilon$ is set to zero. Computational results based on some test instances from the literature are provided.
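
    The cut mechanism can be illustrated in one dimension (a minimal sketch under simplifying assumptions: a differentiable g, a grid search for the maximum gap, and one subgradient cut per iteration in the spirit of Algorithm 1):

        import numpy as np

        # Build an eps-polyhedral underestimator of the convex g(x) = x^2
        # on [-2, 2] by repeatedly adding a supporting (tangent) cut at the
        # point where the gap to the current polyhedral model is largest.
        g = lambda x: x**2
        dg = lambda x: 2 * x                 # derivative = subgradient here
        eps = 1e-2
        grid = np.linspace(-2, 2, 2001)

        cuts = [(dg(0.0), g(0.0) - dg(0.0) * 0.0)]   # initial cut at x = 0
        while True:
            model = np.max([a * grid + b for a, b in cuts], axis=0)
            gaps = g(grid) - model           # g >= model by convexity
            i = int(np.argmax(gaps))
            if gaps[i] <= eps:               # eps-underestimator reached
                break
            xc = grid[i]                     # cut where the model is worst
            cuts.append((dg(xc), g(xc) - dg(xc) * xc))

        print(f"{len(cuts)} cuts give an {eps}-underestimator on [-2, 2]")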

    Convergence analysis of a norm minimization-based convex vector optimization algorithm

    In this work, we propose an outer approximation algorithm for solving bounded convex vector optimization problems (CVOPs). The scalarization model solved iteratively within the algorithm is a modification of the norm-minimizing scalarization proposed in Ararat et al. (2022). For a predetermined tolerance $\epsilon > 0$, we prove that the algorithm terminates after finitely many iterations, and it returns a polyhedral outer approximation to the upper image of the CVOP such that the Hausdorff distance between the two is less than $\epsilon$. We show that for an arbitrary norm used in the scalarization models, the approximation error after $k$ iterations decreases by the order of $\mathcal{O}(k^{1/(1-q)})$, where $q$ is the dimension of the objective space. An improved convergence rate of $\mathcal{O}(k^{2/(1-q)})$ is proved for the special case of using the Euclidean norm.
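
    In rough form (a hedged paraphrase of the norm-minimizing scalarization of Ararat et al. (2022), not the modified model studied here), the subproblem solved for a vertex $v$ of the current outer approximation is

        \[
        \min \,\{\, \|z\| \;:\; g(x) \le 0,\ \Gamma(x) \le_C v + z,\ z \in \mathbb{R}^q \,\},
        \]

    whose optimal value measures the distance from $v$ to the upper image in the chosen norm; one appeal of this scalarization over the Pascoletti-Serafini one is that it requires no direction parameter.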